I never understood how to use Docker. What makes it so special? I would really like to use it on my Raspberry Pi 3 Model B+ to ease the setup process of self-hosting different things.
I’m currently running these things without Docker:
- Mumble server with a Discord bridge and a music bot
- Maubot, a plugin-based Matrix bot
- FTP server
- Two Discord Music bots
All of these things are running as systemd services in the background. Should I change this? A lot of the things I’m hosting offer Docker images.
It would also be great if someone could give me a quick-start guide for Docker. Thanks in advance!
Docker is amazing but not strictly needed. You can compare it to a simpler VM. You can take a Docker container and run it on any machine. You get an environment that is separate from your host, and the host and the container can only interact through defined points (volumes and ports).
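For example, a minimal sketch of those defined points (the image and paths here are just illustrative):

```
# The container only touches the host through what you map explicitly:
# one port and one directory in this case.
docker run -d \
  -p 8080:80 \
  -v /srv/webdata:/usr/share/nginx/html \
  nginx
```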
Imagine you need to run a second Mumble server. I've never set one up, but a second instance of a service is often not that easy. With Docker it's easy: the only difference is that you need to use different host ports, assuming you have only one network interface, or you put both behind a reverse proxy. You can create a second instance to test stuff without interrupting your production system. It's also a security benefit, because each container is isolated to some degree, and you can remove one easily.
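Roughly like this, assuming some Mumble server image exists (the image name below is made up; 64738 is Mumble's default port):

```
# Two independent instances from the same image; only the host port differs
docker run -d --name mumble1 -p 64738:64738 -p 64738:64738/udp some/mumble-server
docker run -d --name mumble2 -p 64739:64738 -p 64739:64738/udp some/mumble-server

# Done experimenting? Remove the second one without touching the first
docker rm -f mumble2
```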
I started using it with MSSQL Server, because I hated how invasive it is on a Windows machine, especially since I only needed it temporarily to do stuff with it. I'm not a Microsoft admin, and I know that servers from Microsoft are a different level. Docker allowed me to start, stop, and remove it very easily. After that I started using it for a lot of things and brought my NAS to the next level.
Also worth mentioning are Linux Containers (LXC). Proxmox uses them, though I have less experience there. LXC feels more like a full VM than Docker but uses fewer resources. This is the reason containers in general are so popular: they are less resource-hungry than a full VM but still have advantages over running everything directly on one machine. With Docker you rarely get into the container itself; you may execute some commands, like a create-user command or a one-time job, but you don't normally access it via a shell from the inside (though it's possible). With LXC, on the other hand, you do use the shell, so it feels more like a full system.
I started self-hosting a bit prior to when Docker took off, and getting multiple services running was much harder. Service A wants a certain version of PHP installed with certain plugins while Service B wants a different version. You’d follow a tutorial for installing Service C and desperately hope that it wouldn’t somehow break Service A or B. You installed Service D for a bit despite all the installation pain and now want to uninstall it - I hope you tracked exactly what config changes you made throughout the system so you can undo it.
Docker fixed all of this by making each service independent through containers, which made self-hosting 10x easier. I'd also add that I love how easy it is to transfer my setup to a new server: I keep all of my container volumes in one directory and my docker-compose files in another, and that's all I need to back up / transfer. Without Docker you'd have to specifically handle each and every configuration file and database location, and if you later upgrade to a newer version of the OS or a different distro you'd have to handle possible conflicts between your versions and what the distro expects.
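As a sketch of that layout (the paths are just my example; stop the stacks first so databases are in a consistent state):

```
# All state lives in two directories:
#   /srv/volumes/  - bind-mounted container data
#   /srv/compose/  - docker-compose files
# so a full backup or transfer is a single archive:
tar czf selfhost-backup.tar.gz /srv/volumes /srv/compose
```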
Recent video that explains Docker very well: https://www.youtube.com/watch?v=rIrNIzy6U_g
This blog post explains it well:
https://cosmicbyt.es/posts/demistifying-containers-part-1/
Essentially, containers are means of creating environments in which you can run software, and those environments are:
- isolated, which makes for a very controlled environment and makes it much harder to run into errors
- reproducible: we have tools that reproduce the same container from an image file
- easy to distribute: just have the container image.
- little to no compromises on performance (at least on Linux)
It is essentially a way for you to run a program without having to worry how to set up the environment, why it didn’t work as expected, what dependencies you’re missing, etc.
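As an illustration of the "easy to distribute" part, you can move an image between machines without even using a registry (the image name is illustrative):

```
# On machine A: export the image to a tarball
docker save -o myapp.tar myapp:latest
# Copy it over, then on machine B: load and run it, no other setup needed
docker load -i myapp.tar
docker run -d myapp:latest
```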
There have been some great answers on this so far, but I want to highlight my favourite part of Docker: the disposability.
When you have a running Docker container, you can hop in, fuck about with files, break stuff as you try to figure something out, and then kill the container and all of the mess you’ve created is gone. Now tweak your config and spin up a fresh one exactly the way you need it.
You’ve been running a service for 6 months and there’s a new upgrade. Delete your instance and just start up the new one. Worried that there might be some cruft left over from before? Don’t be! Every new instance is a clean slate. Regular, reproducible deployments are the norm now.
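With compose, and assuming your data lives in volumes, that upgrade is about three commands:

```
docker compose pull      # fetch the new image version
docker compose up -d     # recreate the container from the fresh image
docker image prune -f    # optionally clear out the old image
```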
As a developer it’s even better: the thing you develop locally is identical to the thing that’s built, tested, and deployed in CI.
I <3 Docker!
What about your preferences/configs/files (when you spin up a fresh one)?
The most popular way of configuring containers is with environment variables that live outside the container. For apps that use files to store configuration, you can designate directories on your host that will be available inside the container (called "volumes" in Docker land). It's also possible to link multiple containers together, so you can have a database container running alongside the app.
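A minimal compose sketch showing all three at once (the image, variable names, and paths are illustrative):

```
services:
  app:
    image: example/webapp:latest
    environment:
      - DB_HOST=db            # config via environment variables
      - DB_PASSWORD=changeme
    volumes:
      - ./app-config:/etc/webapp   # host directory visible inside the container
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=changeme
    volumes:
      - ./db-data:/var/lib/postgresql/data   # database state survives the container
```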
If you have all of that set up, then what benefit is there to blowing away your container and spinning up a 'fresh' one? I've never been able to wrap my head around Docker, and I think this is a big part of it.
There’s a lot more to an application than its configuration. It may require certain specific system libraries, need a certain way of starting up, or a whole host of other special things. With a container, the app dev can precreate a perfect environment for their program and save you LOADS of hassle trying to set it up.
The benefit of all this is that you know exactly where application state is stored, you know that you're running the app in its proper environment, and it becomes turbo easy to install updates, or roll back if needed.
Totally spin up a VM, install Docker on it, and deploy 2-3 web apps. You'll notice that you configure, start, and stop them all the same way, and you might not want to look back ;)
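Whatever the app is, the day-to-day handling looks the same:

```
docker compose up -d     # start any app
docker compose logs -f   # watch what it's doing
docker compose down      # stop and remove it
```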
I’ve played with it a bit. I think I was using something called DockStarter and Portainer. Like I said though, I could never quite grasp what was going on. Now for my home webapps I use Yunohost, and for my media server I use Swizzin CE. I’ve found these to be a lot easier, but I will try Docker again sometime.
Docker makes sense if you are deploying thousands of machines in the cloud. I don’t think it makes as much sense if you have your own hardware.
Some services do have 1-line installers with docker, so those might be useful. But they usually have 1-line non-docker installers too.
Docker still makes sense on your own hardware, especially if you're the type of person to try out different programs often.
If you're already using systemd, don't switch to Docker; use Podman instead. Docker runs all your services under the Docker daemon, while Podman can run the same containers as plain systemd services.
I used to run systemd units that just start docker-compose files; that's also a thing, I suppose. Though generally it's easy to manage the container directly (killing/restarting) without the lifecycle a systemd unit adds, I would say.
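For reference, a minimal sketch of such a unit (paths are illustrative):

```
# /etc/systemd/system/myapp.service
[Unit]
Description=myapp compose stack
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```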
Quadlets with Podman have completely replaced compose files for me. I use the Kubernetes configs. Then I run a Tailscale container in the pod and BAM, all of my computers can access that service without having to expose any ports.
Then I have an Ansible playbook to log in to the host and start a detached tmux session so my user systemd services keep running. It's all rootless, and just so dang easy.
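For anyone curious, a quadlet is just a unit-style file that Podman turns into a systemd service. A minimal rootless sketch (image and port are illustrative):

```
# ~/.config/containers/systemd/web.container
[Unit]
Description=example web service

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

Then run systemctl --user daemon-reload and systemctl --user start web.service, and it's managed like any other unit.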
Try to run something that requires php7 and something else that requires php8 on the same web server; or python 2 and python 3.
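With containers that conflict disappears, e.g. using the official PHP images (ports and paths are arbitrary):

```
# Each app gets its own PHP version, side by side on the same host
docker run -d -p 8081:80 -v /srv/legacy-app:/var/www/html php:7.4-apache
docker run -d -p 8082:80 -v /srv/new-app:/var/www/html php:8.2-apache
```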
You actually can, but it’s not pretty.
(The thing about a declarative setup isn’t much of a difference, you can do it for any popular Linux distro.)
Doesn't that mean docker containers use much more resources, since you're installing numerous instances and versions of each program, like PHP?
Oh, sure, the bloat on your images requires resources from the host.
There is the option of sharing things, but obviously that conflicts a bit with keeping your environments isolated.
Docker's documentation is actually pretty good. I'd recommend taking a look at it, because it's written really well and can even serve as a decent primer on learning to read documentation.
I would recommend learning docker / containerization. For your use case you likely won’t see a big benefit HOWEVER it is a good technology to know.
As far as the "why" you'd use it: there are too many reasons to list, but for your use case I'd argue the why is "just so you know how to do it", and you'll come up with your own why along the way.
Simplest why beyond "it's a good technology to know": updating an app is as simple as pulling a new image and relaunching the container.
The thing that confused me when first learning about Docker was that everybody compares it to a virtual machine. It's not. Containers don't virtualize anything. They take a (single) process from the host OS and separate it into its own environment. All system calls, memory accesses, file writes etc. are still handled by the same OS (same kernel). However, the process is separated on both the filesystem and the process level. It can't see other processes outside of the container, and it also doesn't see the real filesystem; it sees a filesystem provided by the container. This also means it sees different file and user permissions. When you run an Alpine Linux Docker container on an Ubuntu system, the container contains only the (few) files for Alpine, but no Linux kernel and no desktop environment. A process inside that container sees only the Alpine files, not the Ubuntu files. It also means all containers see filesystems independent of each other and can use libraries and dependencies of different versions (they are only files, after all).
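You can see all of this for yourself if Docker is installed:

```
docker run --rm alpine cat /etc/os-release   # reports Alpine, not the host distro
docker run --rm alpine uname -r              # but the kernel version matches the host
docker run --rm alpine ps                    # and ps sees only itself, as PID 1
```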
For administration it makes running complex services easy. You define how to set up the service (what base Linux distro to use, what packages to install, what commands to run, and how to start the process). You can then safely assume that the setup of one service did not interfere with the setup of any other service. Service 1 needs a certain system-wide config changed? Service 2 needs that config in its default state? And both need a different version of the same library? In containers you can have all of that at the same time, because each one sees its own version of the same config and library.
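That definition is the Dockerfile. A minimal sketch (the service and config file are illustrative):

```
FROM debian:bookworm-slim
# install this service's packages without touching any other container
RUN apt-get update && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*
# this container gets its own copy of the config, in whatever state it needs
COPY nginx.conf /etc/nginx/nginx.conf
# how to start the process
CMD ["nginx", "-g", "daemon off;"]
```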
And all of this is provided by the kernel itself. All Docker does is provide an "easy" way to create and manage containers; you could do all of it yourself using chroot, runc, and a few other tools.
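For instance, the process isolation alone is a single util-linux command (needs root):

```
# New PID namespace with its own /proc: ps now sees only itself
sudo unshare --fork --pid --mount-proc ps -e
```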
As a note, containers usually don't come with systemd, as they don't need an init system. You run the service directly inside the container and then use systemd outside the container to make sure the container is started/restarted, or just let Docker do it, since it can already handle that.
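The Docker-native version of that restart handling is just a flag (the image name is illustrative):

```
docker run -d --name myservice --restart unless-stopped example/image
```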
I found a great article demystifying containers recently
While you are technically right, there is very little logical difference between containers and VMs. Really the only fundamental difference is that containers use the same kernel while VMs run their own (let's not even worry about paravirtualization right now).
In practice I would say the biggest difference is that there is better memory sharing so total memory usage will often be less. But honestly this mostly comes down to the fact that the average container bundles less software than the average VM image. Easier management of volumes is also nice because typically you will just bind-mount a host directory, but it also isn’t hard to mount a block device on a Linux host.
Docker is one piece of software that uses Linux containers to encapsulate a program and that program's dependencies, while limiting its access to the underlying OS. It's chroot, but for more of the system. It can make running software that has a lot of moving parts and dependencies easier. It can also improve your security when running that software.
For how-tos, watch one of the 875,936 YouTube tutorials, or read one of the 3 million text tutorials. Or ask ChatGPT, if you really need hand-holding.
For your use case, consider it to be a packaging format (like AppImage, Flatpak, Deb, RPM, etc.) that includes all the dependencies (including services, not just libraries) for the app in question.
Should I change this?
If it’s not broken don’t fix it.
Use Podman (my preference; the systemd approach is awesome), containerd, or Incus. Docker is a graveyard of half-finished pet projects that have no reason for existing. Podman has a Docker-compatible socket, so 100% of Docker tooling will work with it.
I can add that Podman was passed over at my day job in previous years because of some reliability issues, either with GPU access or networking, I forget which. However, those issues have been resolved, and we're now reimplementing it pretty much effortlessly.
Yep, we're reconsidering it at work as well. It's grown pretty nicely.
I’ve used Docker a fair bit over the years because it’s a simple line of code I can copy/paste to get a simple web server running.
I ran Home Assistant Supervised in Docker for many years. It was a few lines of code and then I basically had Home Assistant OS running on my Pi without it taking over the whole Pi, meaning I could run other things on it too.
That ended when HA just died one day and I had no clue how to get it running again. I spent a day trying, then just installed HA OS on the Pi instead.
Anyway, I now have a Dell Optiplex with Proxmox and I've gone back to Docker. Why? Well, I discovered that I could make a Linux VM, install Docker on it, add the Docker command that installs a Portainer client, and then make that into a template.
Meaning I can clone that template and type the IP address into Portainer and now I have full access to that Docker instance from my original Portainer container. That means I can bang a Docker Compose file into the “Stack” and press go, then tinker with the thing I wanna tinker with. If I get it working it can stay, if I don’t then I just delete the VM and I’ve lost nothing.
Portainer has made Docker way more accessible for me. I love a webui
I use Proxmox to run Debian VMs that run docker compose "stacks".
Some VMs are dedicated to a single service's docker compose stack.
Some VMs run one compose stack with a bunch of different services.
Some services are run across multiple nodes with HA VIPs and all that jazz for “guaranteed” uptime.
I see the guest VM as a collection, but there is only ever 1 compose file per host.
Has a bit of overhead, but makes it really easy to reason about, and to separate VLANs, firewall rules etc.
What is Portainer? You've said that it's a web UI, but what exactly does it provide you with?
Well, the web UI gives me a list of containers, whether they're running or not, and the ports opened by each container. There are Stacks, which are basically Docker Compose files in a neat UI, plus the ability to move these stacks to other instances. There are the network options and the ability to create more networks, and the files associated with the containers.
And not just for the instance I’m in, but for all the instances I’ve connected.
In my previous experience with Docker, these are all things I'd need to remember commands for, meaning I'd most often have to Google to find what I'm after. Here it's all neatly packaged in a web page.
Oh, and the logs, which are really useful when tinkering to try to get something up and running.
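For reference, the CLI commands Portainer wraps here are roughly:

```
docker ps -a                 # list containers and whether they're running
docker logs -f <container>   # the logs view
docker compose up -d         # deploy a "stack"
docker network ls            # the network options
```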
Does Portainer, or Docker in turn, allow taking point-in-time snapshots of containers, like VM software does? Snapshots make it easy to tinker with stuff, knowing that if I mess up, I can go back to a snapshot and be good again.
Not to my knowledge, no.
Sounds awesome! I’ve taken a look at Portainer and got confused on the whole Business Edition and Community Edition. What are you running?
Community edition. It’s free!
Docker can be many things, and for those using Docker to run services, Portainer is a nice replacement for the command line. It's got a great web interface. For automation and most development, Docker and Compose are my pick. Also a good fit for those that only use X to spawn terminals.
Vs. LXD?