I recognize this will vary depending on how much you self-host, so I’m curious about the range of experiences from the few self-hosted things to the many self-hosted things.

Also, how might you compare it to the maintenance of your other systems (e.g. personal computer/phone/etc.)?

  • @[email protected] · 2 points · 1 year ago

    Almost none now that I automated updates and a few other things with Kestra and Ansible. I need to figure out alerting in Wazuh, and then it will probably drop to none.
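
    Not the poster's actual setup, but the kind of update task such automation typically wraps is a one-liner; a sketch, where the `homelab` host group name is an assumption:

    ```shell
    # hypothetical ad-hoc Ansible run a scheduler like Kestra might trigger;
    # "homelab" is a made-up inventory group, not from the post
    ansible homelab -b -m ansible.builtin.apt \
      -a "update_cache=yes upgrade=dist autoremove=yes"
    ```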

  • @[email protected] · 4 points · 1 year ago

    If you set it up really well, you’ll probably only need to invest maybe an hour or so every week or two. But it also depends on what kind of maintenance you mean. I spend a lot of time downloading things and putting them in the right place so that my TV is properly entertaining. Is that maintenance? As for updating things, I’ve set up most of that to be automatic. The stuff that’s not automatic, like pulling new docker images, I do every couple weeks. Sometimes that involves running update scripts or changing configs. Usually it’s just a couple commands.

    • @[email protected] (OP) · 2 points · 1 year ago

      Yeah, to clarify: I don't mean organizing/arranging files as part of maintenance, more so handling the various installs/configs/updates. Because folks mostly show up here when they need help, it can appear as if it's all much more involved to maintain than it really is (given the right setup and the knowledge to deal with any hiccups).

  • @[email protected] · 2 points · 1 year ago

    Since scrapping systemd, a hell of a lot less, but it can occasionally be a bit of messing about when my dynamic IP gets reassigned.
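
    One common way to take the sting out of IP reassignments is a dynamic-DNS cron job; a sketch, assuming a provider with a simple HTTP update API (the endpoint, hostname, and token below are placeholders, not a real provider):

    ```shell
    #!/bin/sh
    # hypothetical DDNS updater, run from cron every few minutes;
    # URL, hostname, and token are all made up for illustration
    curl -fsS "https://dyndns.example.com/update?hostname=home.example.com&token=SECRET"
    ```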

  • @[email protected] · 3 points · edited · 1 year ago

    After my Nextcloud server killed itself from an update and I ditched that junk software, it's nearly zero maintenance.

    I have

    • auto-updates on
    • daily borgbackups to a Hetzner storage box
    • auto snapshots of the servers and Hetzner
    • cloud-init scripts ready for any of the servers
    • XPipe for management
    • KeePass as a backup for all the SSH keys and passwords

    And I have never used any of those … it just runs and keeps running.
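
    For reference, a daily Borg run to a Hetzner storage box is typically a cron entry along these lines (the storage-box user/host and backed-up paths are placeholders):

    ```shell
    # daily cron job; uXXXXXX is a placeholder storage-box account
    borg create --stats --compression zstd \
      "ssh://[email protected]:23/./backups::{hostname}-{now}" \
      /etc /home /var/lib
    # thin out old archives so the box doesn't fill up
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
      "ssh://[email protected]:23/./backups"
    ```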

    I am self-hosting

    • a website
    • a booking service for me
    • caldav server
    • forgejo
    • opengist
    • jitsi

    I need to set up some file-sharing thing (a Nextcloud replacement), but I am not sure what. My use case is mainly 1) archiving junk, 2) syncing files between three devices, 3) streaming my music collection.

    • @[email protected] · 2 points · 1 year ago

      I moved from Nextcloud to Seafile. The file sync is so much better than Nextcloud and ownCloud.

      It has a normal Windows client and also a mount-type client (SeaDrive), which is also amazing for large libraries.

      I have mine set up with OAuth via Authentik, and it works super well.

      • @[email protected] · 1 point · 1 year ago

        I actually moved from Seafile to Nextcloud, because when I had two PCs running simultaneously it would constantly hit sync errors that required manual resolution. Sadly, Nextcloud wasn't really better. I am now looking for solutions that can avoid file conflicts with two simultaneous clients.

        • @[email protected] · 1 point · 1 year ago

          Are you changing the same files at the same time?

          I have multiple computers syncing into the same library all the time without issue.

          • @[email protected] · 1 point · edited · 1 year ago

            Are you changing the same files at the same time?

            Rarely. But there is some offline laptop use, compounded with slow sync times (I was running it on a Raspberry Pi with an external USB HDD enclosure).

            Either way, I’d like something less fragile. I’ll test seafile again sometime, thanks.

  • @[email protected] · 2 points · 1 year ago

    Very little. I have enough redundancy through regular snapshots and offsite backups that I'm confident enough to let Watchtower auto-update most of my containers once a week - the exceptions being Pi-hole and Home Assistant. Pi-hole gets very few updates anyway, and I tend to skip the mid-month Home Assistant updates, so that's just a once-a-month thing: check for breaking changes, then push the button.
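
    A setup like this usually boils down to Watchtower on a weekly cron schedule, with the hands-on containers opted out via a label. A sketch (the schedule and container names are illustrative, not from the post):

    ```shell
    # weekly run, Mondays 04:00 (Watchtower uses 6-field cron: sec min hour dom month dow);
    # WATCHTOWER_CLEANUP purges the superseded images after an update
    docker run -d --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -e WATCHTOWER_SCHEDULE="0 0 4 * * 1" \
      -e WATCHTOWER_CLEANUP=true \
      containrrr/watchtower

    # opt a container out of auto-updates so it stays manual:
    docker run -d --name homeassistant \
      --label com.centurylinklabs.watchtower.enable=false \
      ghcr.io/home-assistant/home-assistant:stable
    ```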

    Meanwhile my servers’ host OSes are stable LTS distros that require very little maintenance in and of themselves.

    Ultimately I like to tinker, but once I’m done tinkering I want things to just work with very little input from me.

  • @[email protected] · 1 point · 1 year ago

    Very little. Thanks to Docker + Watchtower I don’t even have to check for updates to software. Everything is automatic.

  • @[email protected] · 3 points · 1 year ago

    If my ISP didn’t constantly break my network from their side, I’d have effectively no downtime and nearly zero maintenance. I don’t live on the bleeding edge, I don’t do anything particularly experimental, and most of my containers are as minimal as possible.

    I built my own:

    • x86 router with OPNsense
    • Proxmox hypervisor
    • cheapo WiFi AP
    • ThinkCentre NAS (just 1 drive, Debian with Samba)
    • containers: Tor relay, gonic, corrade, owot, Apache, backups, DNS, owncast

    All of this just works if I leave it alone

  • @[email protected] · 7 points · 1 year ago

    For some reason my DNS tends to break the most. I have to reinstall my Pi-hole semi-regularly.

    NixOS plus Docker is my preferred setup for hosting applications. Sometimes it is a pain to get something running, but once it does, it tends to keep running. If a container doesn’t work, restart it. If the OS doesn’t work, roll it back.
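
    "Restart it or roll it back" is concretely just two commands on that stack (the container name here is a placeholder):

    ```shell
    docker restart pihole                  # kick a misbehaving container
    sudo nixos-rebuild switch --rollback   # activate the previous NixOS system generation
    ```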

  • Max-P · 36 points · edited · 1 year ago

    Very minimal. Mostly just run updates every now and then and fix what breaks which is relatively rare. The Docker stacks in particular are quite painless.

    Couple websites, Lemmy, Matrix, a whole email stack, DNS, IRC bouncer, Nextcloud, WireGuard, Jitsi, a Minecraft server, and I believe that’s about it?

    I’m a DevOps engineer at work, managing 2k+ VMs, and I can more than keep up. I’d say it varies more with experience and how it’s set up than with how much you manage. When you use Ansible and Terraform and Kubernetes, the count of servers and services isn’t really important. One, five, ten, a thousand servers - it matters very little, since you just run Ansible on them and 5 minutes later it’s all up and running. I don’t use that for my own servers out of laziness, but still, I set most of that stuff up 10 years ago and it’s still happily humming along just fine.

    • Footnote2669 · 4 points · 1 year ago

      +1 for docker and minimal maintenance. Only updates or new containers might break stuff. If you don’t touch it, it will be fine. Of course there might be some container specific problems. Depends what you want to run. And I’m not a devops engineer like Max 😅

    • MBV ⚜️ · 1 point · 1 year ago

      Same here - just one update a week on Fridays, between two yawns, for the 4 VMs and 10-15 services I have, plus a quarterly backup. It doesn’t involve much beyond the odd ad-hoc re-linking of the reverse proxy when containers switch IPs on the Docker network after a VM restart/reset.

  • @[email protected] · 5 points · edited · 1 year ago

    Not heaps, although I should probably do more than I do. Generally speaking, on Saturday mornings:

    • Between 2am-4am, Watchtower on all my docker hosts pulls updated images for my containers and notifies me via Slack. Then, over coffee when I get up:
      • For containers I don’t care about, Watchtower auto-updates them as well, at which point I simply check the service is running and purge the old images
      • For mission-critical containers (Pi-hole, Home Assistant, etc), I manually update the containers and verify functionality, before purging old images
    • I then check for updates on my OPNsense firewall, and do a controlled update if required (needs me to jump onto a specific wireless SSID to be able to do so)
    • Finally, my two internet-facing hosts (Nginx reverse proxy and Wireguard VPN server) auto-update their OS and packages using unattended-upgrades, so I test inbound functionality on those

    What I still want to do is develop some Ansible playbooks to deploy unattended-upgrades across my fleet (~40ish Debian/docker LXCs). I fear I have some tech debt growing on those hosts, but have fallen into the convenient trap of knowing my internet-facing gear is always up to date, so I can be lazy about the rest.
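
    For anyone wanting the same safety net on their own hosts, enabling unattended-upgrades on a Debian/Ubuntu box is just two commands per host:

    ```shell
    sudo apt-get install -y unattended-upgrades
    # writes /etc/apt/apt.conf.d/20auto-upgrades to enable periodic runs
    sudo dpkg-reconfigure -plow unattended-upgrades
    ```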

  • Presi300 · 2 points · edited · 1 year ago

    I just did a big upgrade to my “home lab” (got a new switch and moved it out of my bedroom), which required some maintenance in the days after the upgrade: running a new ethernet cable because the old one just couldn’t hack gigabit, reconfiguring my router and AP, just general stuff like that.

    Other than that and my DHCP/DNS VM sometimes forgetting to autostart after a power outage, pretty much 0 maintenance

  • @[email protected] · 4 points · edited · 1 year ago

    I run two local physical servers, one production and one dev (and a third prod2 kept in case of a prod1 failure), and two remote production/backup servers all running Proxmox, and two VPSs. Most apps are dockerised inside LXC containers (on Proxmox) or just docker on Ubuntu (VPSs). Each of the three locations runs a Synology NAS in addition to the server.

    Backups run automatically, and I manually run apt updates on everything each weekend with a single ansible playbook. Every host runs a little golang program that exposes the memory and disk use percent as a JSON endpoint, and I use two instances of Uptime Kuma (one local, and one on fly.io) to monitor all of those with keywords.
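
    The poster's endpoint is a Go program, but the idea is simple enough to sketch in a few lines of shell - this just shows the JSON shape a keyword monitor like Uptime Kuma could match against (assumes Linux, i.e. /proc/meminfo and GNU df):

    ```shell
    #!/bin/sh
    # emit memory and root-disk use percentages as JSON for a keyword monitor
    mem_pct=$(awk '/MemTotal/ {t=$2} /MemAvailable/ {a=$2} END {printf "%d", (t-a)*100/t}' /proc/meminfo)
    disk_pct=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
    printf '{"mem_pct": %s, "disk_pct": %s}\n' "$mem_pct" "$disk_pct"
    ```

    Uptime Kuma's keyword monitor can then alert whenever an expected substring (or a healthy value) stops appearing in the response.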

    So -

    • weekly: 10 minutes to run the update playbook; I usually ssh into the VPSs, have a look at the Fail2Ban stats, and reboot them if needed. I also look at each of the Proxmox GUIs to check the backups have been working as expected.
    • Monthly: stop the local prod machine and switch to the prod2 machine (from backups) for a few days. Probably 30 minutes each way, most of it waiting for backups.
    • From time to time (if I hear of a security update), but generally every three months: Look through my container versions and see if I want to update them. They’re on docker compose so the steps are just backup the LXC, docker down, pull, up - probs 5 minutes per container.
    • Yearly: consider whether I need to do operating system upgrades - e.g. to Proxmox 8, or a new Debian or Ubuntu LTS
    • Yearly: visit the remotes and have a proper check/clean up/updates
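
    The per-container routine from the list above, spelled out (the stack directory is illustrative; assumes docker compose v2):

    ```shell
    # after snapshotting the LXC in Proxmox:
    cd /opt/stacks/myapp   # hypothetical compose project directory
    docker compose down    # stop the stack
    docker compose pull    # fetch the new images
    docker compose up -d   # recreate containers on the new images
    docker image prune -f  # drop the superseded images
    ```
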

  • Mikelius · 3 points · 1 year ago

    Not much for myself, like many others. But my backups are manual. I have an external drive I back up to and then unplug, as I intentionally want to keep it completely isolated from the network in case of a breach. Because of that, maybe 10 minutes a week? I'm running Gentoo with tons of scripts and docker containers that update automatically. The only time I need to intervene in the updates is when my script sends me a push notification about an eselect news item (like a major upcoming update) or a kernel update.
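
    The news check in such a script can be as small as this sketch (the poster's actual notification mechanism isn't specified; the ntfy topic here is a placeholder):

    ```shell
    #!/bin/sh
    # Gentoo: push a notification if there are unread eselect news items
    count=$(eselect news count new)
    if [ "$count" -gt 0 ]; then
      curl -fsS -d "$count unread eselect news item(s)" "https://ntfy.sh/my-homelab-topic"
    fi
    ```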

    I also use custom monitoring software I wrote, tied into a MySQL db that Grafana connects to, for general software and network alerts (new devices connecting to the network, suspicious DNS requests, suspicious ports, suspicious countries being reached out to like China, etc.) and hardware failures (like a RAID drive failing)… So yeah, automate if you know how to script or program, and you’ll be pretty much worry-free most of the time.