I recognize this will vary depending on how much you self-host, so I’m curious about the range of experiences from the few self-hosted things to the many self-hosted things.
Also, how would you compare it to the maintenance of your other systems (e.g. personal computer/phone/etc.)?
It’s very minimal in normal use, maybe like an hour or two a month at most.
If you’re not publicly exposing things? I can go months without touching it. Then go through and update everything in an hour or so on the weekend.
Generally, no. Most of the time the updates work without a hitch. With the exception of Nextcloud, which will always break during an upgrade.
And why I no longer run NC. Every time it would fuck itself to death and I’d have to start from scratch again.
If you set it up really well, you’ll probably only need to invest maybe an hour or so every week or two. But it also depends on what kind of maintenance you mean. I spend a lot of time downloading things and putting them in the right place so that my TV is properly entertaining. Is that maintenance? As for updating things, I’ve set up most of that to be automatic. The stuff that’s not automatic, like pulling new docker images, I do every couple weeks. Sometimes that involves running update scripts or changing configs. Usually it’s just a couple commands.
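For a Compose-based stack, those couple of commands are typically along these lines (a sketch, assuming docker compose v2 and that everything is defined in a compose file):

```bash
docker compose pull      # fetch newer images for all services
docker compose up -d     # recreate only the containers whose image changed
docker image prune -f    # drop the superseded images afterwards
```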
Yeah, to clarify, I don’t mean organizing/arranging files as part of maintenance, more so handling the different installs/configs/updates. Since the folks who come around here are often asking for help, it can appear as if it’s all much more involved to maintain than it really is (given the right setups and the knowledge to deal with any hiccups).
Once set up correctly, almost none.
I could spend a lifetime setting up my self hosted stuff correctly.
True, didn’t say that it didn’t take me an eternity to set it up
As a complete noob trying to make a TrueNAS server: none, and then suddenly lots when idk how to fix something that broke.
Not much for myself, like many others. But my backups are manual. I have an external drive I back up to and then unplug, as I intentionally want to keep it completely isolated from the network in case of a breach. Because of that, maybe 10 minutes a week? Running Gentoo with tons of scripts and docker containers that I have automatically updating. The only time I need to intervene in the updates is when my script sends me a push notification of an eselect news item (like a major upcoming update) or kernel update.
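That notification hook can be as simple as something like this — a sketch, assuming a push service like ntfy (the topic URL is made up):

```bash
#!/bin/bash
# Cron job: ping me when unread eselect news items show up
# (they announce things like major upcoming updates)
unread=$(eselect news count new)
if [ "$unread" -gt 0 ]; then
    curl -s -d "Gentoo: $unread unread eselect news item(s)" \
        https://ntfy.sh/my-server-alerts   # push endpoint is an assumption
fi
```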
I also use custom monitoring software I wrote that ties into a MySQL db that Grafana connects to, for general software alerts, network alerts (new devices connecting to the network, suspicious DNS requests, suspicious ports, suspicious countries being reached out to like China, etc.), or hardware failures (like a RAID drive failing)… So yeah, automate if you know how to script or program, and you’ll be pretty much worry-free most of the time.
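One way a new-device alert like that could be wired up — a sketch assuming arp-scan and the mysql client, with a made-up table schema:

```bash
#!/bin/bash
# Hypothetical watcher: record MACs we haven't seen before into the
# alerts table that Grafana reads from (table/columns are illustrative).
# Needs root for arp-scan; mysql auth assumed via ~/.my.cnf.
KNOWN=/var/lib/netwatch/known_macs.txt
touch "$KNOWN"
arp-scan --localnet | awk '/^[0-9]/ {print $2}' | while read -r mac; do
    if ! grep -qi "$mac" "$KNOWN"; then
        echo "$mac" >> "$KNOWN"
        mysql monitoring -e \
            "INSERT INTO alerts (source, message) VALUES ('netwatch', 'new device: $mac');"
    fi
done
```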
Very little. I have enough redundancy through regular snapshots and offsite backups that I’m confident enough to let Watchtower auto-update most of my containers once a week - the exceptions being pihole and Home Assistant. Pihole gets very few updates anyway, and I tend to skip the mid-month Home Assistant updates so that’s just a once a month thing to check for breaking changes before pushing the button.
Meanwhile my servers’ host OSes are stable LTS distros that require very little maintenance in and of themselves.
Ultimately I like to tinker, but once I’m done tinkering I want things to just work with very little input from me.
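A minimal sketch of that kind of Watchtower schedule, with opt-out labels for the exceptions (the schedule and names are illustrative):

```bash
# Run Watchtower once a week (Sat 03:00; six-field cron, seconds first)
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_SCHEDULE="0 0 3 * * 6" \
  -e WATCHTOWER_CLEANUP=true \
  containrrr/watchtower

# Containers started with this label are skipped by auto-updates
# (e.g. pihole, Home Assistant):
#   --label com.centurylinklabs.watchtower.enable=false
```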
It’s bursty; I tend to do a lot of work on stuff when I do a hardware upgrade, but otherwise it’s set it and forget it for the most part. The only servers I pay any significant attention to in terms of frequent maintenance and security checks are the MTAs in the DMZ for my email. Nothing else is exposed to the internet for inbound traffic except a game server VM that’s segregated (credential-wise and network-wise) from everything else, so if it does get compromised it would be a very minimal danger to the rest of my network. Everything either has automated updates, or for servers I want more control over I manually update them when the mood strikes me or a big vulnerability that affects my software hits the news.
TL;DR If you averaged it over a year, I maybe spend 30-60 minutes a week on self-hosting maintenance tasks for 4 physical servers and about 20 VMs.
sometimes I remember I’m self hosting things
As long as you remember before you turn off the computer!
I don’t understand. “Turn… off?”
neofetch proudly displaying 5 months of uptime
my main PC hosts nothing, everything else is always on
+1. Automate your backup rolling, set up your monitoring and alerting, and then ignore everything until something actually goes wrong. I touch my lab a handful of times a year when it’s time for major updates; otherwise it basically runs itself.
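The backup-rolling part can be a single cron job — a sketch assuming restic (the repo path and retention policy are illustrative):

```bash
#!/bin/bash
# Nightly cron job: take a snapshot, then roll old snapshots off
# per the retention policy. Assumes RESTIC_PASSWORD_FILE is set
# in the cron environment.
restic -r /mnt/backup/repo backup /srv/appdata
restic -r /mnt/backup/repo forget \
    --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```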
Depends what you’re doing. Something like keeping the base OS patched is pretty much nil effort. Some apps are more problematic than others: Home Assistant is always a pain to upgrade, while something like postfix requires nearly zero maintenance.
Mostly nothing, except for Home Assistant, which seems to shit the bed every few months. My other services are Docker containers or Proxmox LXCs that just work.
Not heaps, although I should probably do more than I do. Generally speaking, on Saturday mornings:
- Between 2am-4am, Watchtower on all my docker hosts pulls updated images for my containers, and notifies me via Slack

Then, over coffee when I get up:
- For containers I don’t care about, Watchtower auto-updates them as well, at which point I simply check the service is running and purge the old images
- For mission-critical containers (Pi-hole, Home Assistant, etc), I manually update the containers and verify functionality, before purging old images
- I then check for updates on my OPNsense firewall, and do a controlled update if required (needs me to jump onto a specific wireless SSID to be able to do so)
- Finally, my two internet-facing hosts (Nginx reverse proxy and Wireguard VPN server) auto-update their OS and packages using `unattended-upgrades`, so I test inbound functionality on those
What I still want to do is develop some Ansible playbooks to deploy `unattended-upgrades` across my fleet (~40ish Debian/docker LXCs). I fear I have some tech debt growing on those hosts, but have fallen into the convenient trap of knowing my internet-facing gear is always up to date, and I can be lazy about the rest.
Almost none now that I automated updates and a few other things with Kestra and Ansible. I need to figure out alerting in Wazuh and then it will probably drop to none.
My mini-pc with Debian runs RunTipi 24/7 with Navidrome, Jellyfin and Tailscale. Once every 2-3 weeks I plug in the monitor to run updates and add/remove some media.