  • Same here regarding the *arrs handling the data movement/layout and nfo files. I even have the “Connect” sections for each app set up to trigger rescans, but it seems that especially for files that get replaced by a more optimal version, a duplicate is left over in kodi alongside the new one, which only goes away when you try to play it. I tried switching to a dedicated mysql instance for shits and giggles, no effect. Some day I’ll actually dig into the logs.


  • Yeah, that was the tough pill to swallow: moving away from folder-based navigation (the old *sonic gang) to tag-based navidrome. Not for everyone, but getting your tags in order opens up some nice doors.

    They publish a container image as part of their releases, and you can manage everything with environment variables. If you’re used to running containers, I’d say this is even easier for testing and playing around.
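
    For example, getting it going can be as simple as something like the below (the ND_* values and host paths here are just illustrative; check their docs for the full list of variables):

    # hedged sketch: navidrome configured entirely via ND_* environment variables
    podman run -d --name navidrome \
      -p 4533:4533 \
      -e ND_SCANSCHEDULE=1h \
      -e ND_LOGLEVEL=info \
      -v /srv/music:/music:ro \
      -v /srv/navidrome:/data \
      ghcr.io/navidrome/navidrome:latest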




  • Maybe you’ve tried it already, but navidrome is a great purpose-built music streamer. I was using subsonic back in the day, then airsonic, then airsonic-advanced. When I first got on navidrome it was a tough pill to swallow since I had never maintained my tags, but I put in a little time here and there to comb through them, and in the end it feels like a worthwhile investment. It paid off a bit more when I adopted Lyrion Music Server and Squeeze players for local playback around the home, since it organizes by (mostly) the same tags, so the whole library is kind of plug and play with anything that honors them.


  • It’s not going to randomly disappear your data, but I don’t particularly trust it either. As with anything, keep to a backup strategy. As far as efficiency goes, if you bear in mind it is still a VM, just with most of the configuration hidden away for a simpler experience, I would say it is more convenient than a VM under VirtualBox or VMware Player, especially if you have no need for a full linux desktop environment.






  • Yeah, I don’t think anyone sane would disagree. That’s what forced the decision for me: to expose or not. I was not going to try talking anyone through VPN setup, so it was exposure plus whatever hardening practices could be applied. I wouldn’t really advocate for this route, but I like hearing from others doing it because sometimes a useful bit of info or shared experience pops up. The folder path explanation is news to me; time to obfuscate the hell out of that.



  • My automated workflow is to package up backup sources into tars (uncompressed) and encrypt them with gpg, then ship the tar.gpg off to backblaze b2 and S3 with rclone. I don’t trust cloud providers, so I use two just in case. I’ve not really been in need of full system backups going off-site, rather just the things I’d be severely hurting for if my home exploded.

    But to your main questions, I like gpg because you have good options for encrypting things safely within bash/ash/sh scripting, and the encryption itself is considered strong.

    And I really like rclone because it covers the main cloud providers and wrangles everything down to an rsync-like experience, which is also pretty tidy for shell scripting.
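
    Boiled down, the pipeline is basically the below (the paths, recipient, and remote/bucket names are made up for the example):

    # hedged sketch of the tar -> gpg -> rclone pipeline; all names are examples
    tar -cf backup.tar /path/to/sources                   # uncompressed tar
    gpg --encrypt --recipient you@example.com backup.tar  # writes backup.tar.gpg
    rclone copy backup.tar.gpg b2:my-backup-bucket        # first provider
    rclone copy backup.tar.gpg s3:my-backup-bucket        # second provider, just in case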



  • I do this on the minimal Debian release, which is essentially coming from the same place: you’re left to get things configured with a root user, or maybe a privileged user, after install. There are a few things to tweak for rootless podman, and it will vary based on the distro. The gist for me on Debian is:

    1. make an unprivileged account for running podman containers
    2. enable linger so I can use systemd with this account to manage the running containers
    3. allow lower ports for rootless podman in sysctl (for example, 80 if you’re running basic http services rootless): net.ipv4.ip_unprivileged_port_start=<start of lower range of ports rootless containers will use>
    4. run containers with the appropriate --userns flags. This can vary a lot depending on the container. Some maintainers are nice and ensure the internal uid/gid is something expected like 1000; sometimes not, and you have to fire it up and figure out the app’s account name and uid/gid. An example I’ll put here is a podman run snippet for running jenkins (official image from cloudbees) rootless:

    podman run --name jenkins --user jenkins --userns=keep-id:uid=1000,gid=1000 ...
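
    And for reference, a minimal sketch of steps 1 through 3 on Debian might look like the below (the account name and port are just examples, and the sysctl needs persisting via /etc/sysctl.d to survive reboots):

    # hedged sketch of steps 1-3; run as root
    useradd -m podmanuser                           # 1. unprivileged account
    loginctl enable-linger podmanuser               # 2. its systemd user services outlive logins
    sysctl net.ipv4.ip_unprivileged_port_start=80   # 3. rootless containers may bind :80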

    Again, that’s just Debian. I’ve never tried MicroOS, but if MicroOS isn’t doing anything special to accommodate rootless podman, I imagine these steps are somewhat applicable. One issue I ran into was with an older version of podman, whatever ships with Ubuntu 22: that version requires you to set the namespace mappings yourself; Debian 12’s version does not, and the --userns=keep-id flag just works.



  • I expose jellyfin to the internet, and some precautions I have taken that I don’t see mentioned in these answers are: 1) run jellyfin as a rootless container, and 2) use read-only storage wherever possible. If you have other tools managing things like subtitles and metadata files before jellyfin, there’s no reason for jellyfin to have write access to the media it hosts. While this doesn’t directly address the documented security flaws with jellyfin, you may as well treat it like a diseased plague rat if you’re going to expose it. To me, that means the worst case scenario is the thing is breached and all an attacker can do is exfiltrate what jellyfin itself can read.
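
    As a rough illustration of both points, a rootless podman invocation could look something like the below (the paths and tag are placeholders, not my actual setup):

    # hedged sketch: rootless jellyfin with read-only media mounts
    # media is mounted :ro so jellyfin cannot write to it; config still needs write
    podman run -d --name jellyfin \
      -p 8096:8096 \
      -v /srv/media:/media:ro \
      -v /srv/jellyfin/config:/config \
      docker.io/jellyfin/jellyfin:latest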