The complete guide to building your personal self-hosted server for streaming and ad-blocking.

Captain’s note: This OC was originally posted on Reddit, but its quality makes me want to ensure a copy survives on Lemmy as well.


We will setup the following applications in this guide:

  • Docker
  • AdGuard Home - Ad blocker for all your devices
  • Jellyfin/Plex - For watching the content you download
  • qBittorrent - Torrent downloader
  • Jackett - Torrent indexers provider
  • FlareSolverr - For automatically solving captchas on some of the indexers
  • Sonarr - *arr service for automatically downloading TV shows
  • Radarr - *arr service for movies
  • Readarr - *arr service for (audio)books
  • Lidarr - *arr service for music
  • Bazarr - Automatically downloads subtitles for Sonarr and Radarr
  • Ombi/Overseerr - For requesting movies and TV shows through Sonarr and Radarr
  • Heimdall - Dashboard for all the services so you don’t need to remember all the ports

Once you are done, your dashboard will look something like this.

Heimdall Dashboard

I started building my setup after reading this guide https://www.reddit.com/r/Piracy/comments/ma1hlm/the_complete_guide_to_building_your_own_personal/.

Hardware

You don’t need powerful hardware for this setup. I use a decade-old computer with the following hardware; a Raspberry Pi works fine too.

Hardware

Operating system

I will be using Ubuntu Server in this guide. You can select whichever Linux distro you prefer.

Download Ubuntu Server from https://ubuntu.com/download/server. Create a bootable USB drive using Rufus or any other software (I prefer Ventoy). Plug the USB into your computer, select the USB drive from the boot menu, and install Ubuntu Server. Follow the steps to install and configure Ubuntu, and make sure to check “Install OpenSSH server”. Don’t install Docker during the setup, as that installs the snap version; we will install Docker properly later.

Once the installation finishes, reboot and connect to your machine remotely using SSH.

ssh username@server-ip
# username: the user you created during installation
# server-ip: run `ip a` on the server to find it; look for a device
# like enp4s0 with an address starting with 192.168.

Create the directories for audiobooks, books, movies, music and tv.

I keep all my media in ~/server/media. If you will be using multiple drives, you can look up how to mount drives.

We will be using hardlinks: once a torrent finishes downloading, the files are linked into the media directory as well as the torrents directory without using double the storage space. Read the TRaSH guides for a better understanding.
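To see why hardlinks avoid double storage, here is a quick throwaway demonstration (the file names are just examples):

```shell
# A hardlink is the same file under two names: no data is copied.
tmp=$(mktemp -d)
echo "episode data" > "$tmp/torrents-copy"
ln "$tmp/torrents-copy" "$tmp/media-copy"   # hardlink, not a full copy
stat -c %h "$tmp/torrents-copy"             # link count: prints 2
rm -rf "$tmp"
```

Note that hardlinks only work when both directories are on the same filesystem, which is one reason media and torrents both live under ~/server.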

mkdir ~/server
mkdir ~/server/media # Media directory
mkdir ~/server/torrents # Torrents

# Creating the directories for torrents
cd ~/server/torrents
mkdir audiobooks  books  incomplete  movies  music  tv 

cd ~/server/media
mkdir audiobooks  books  movies  music  tv
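The same directory tree can be created in one go with shell brace expansion (equivalent to the commands above):

```shell
# Create the media and torrents trees in a single command
mkdir -p ~/server/media/{audiobooks,books,movies,music,tv} \
         ~/server/torrents/{audiobooks,books,incomplete,movies,music,tv}
```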

Installing Docker and docker-compose

Docker https://docs.docker.com/engine/install/ubuntu/

# install packages to allow apt to use a repository over HTTPS
sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
# Add Docker’s official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Setup the repository
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
# Add user to the docker group to run docker commands without requiring root
sudo usermod -aG docker $(whoami) 

Sign out by typing exit in the console and then SSH back in, so the group change takes effect.

Docker Compose https://docs.docker.com/compose/install/ (note: on newer Docker installations, Compose ships as a plugin and is invoked as docker compose instead of docker-compose)

# Download the current stable release of Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# Apply executable permissions to the binary
sudo chmod +x /usr/local/bin/docker-compose
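If the curl download silently fails (for example, a “Not Found” error page gets saved instead of the binary), docker-compose later fails with something like “line 1: Not: command not found”. A hypothetical is_elf helper (not part of the guide, just a sketch) can sanity-check that the downloaded file is a real Linux binary rather than saved error text:

```shell
# Hypothetical helper: a real Linux binary starts with the ELF magic
# bytes 7f 45 4c 46; a saved HTML/text error page does not.
is_elf() {
  [ "$(head -c 4 "$1" | od -An -tx1 | tr -d ' \n')" = "7f454c46" ]
}

# Example: a saved error page is plain text, so the check fails.
tmp=$(mktemp)
printf 'Not Found' > "$tmp"
is_elf "$tmp" || echo "not a binary - re-run the curl download"
rm -f "$tmp"
```

After the download, running is_elf /usr/local/bin/docker-compose should succeed; if it fails, re-run the curl command and check the URL.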

Creating the compose file for AdGuard Home

First, set up AdGuard Home in a new compose file.

Docker Compose uses a YAML file. Every compose file contains a version field and a services object.

Create a directory for keeping the compose files.

mkdir ~/server/compose
mkdir ~/server/compose/adguard-home
vi ~/server/compose/adguard-home/docker-compose.yml

Save the following content to the docker-compose.yml file. The AdGuard Home documentation explains what each port does.

version: '3.3'
services:
    run:
        container_name: adguardhome
        restart: unless-stopped
        volumes:
            - '/home/${USER}/server/configs/adguardhome/workdir:/opt/adguardhome/work'
            - '/home/${USER}/server/configs/adguardhome/confdir:/opt/adguardhome/conf'
        ports:
            - '53:53/tcp'     # DNS
            - '53:53/udp'     # DNS
            - '67:67/udp'     # DHCP server
            - '68:68/udp'     # DHCP client
            - '68:68/tcp'     # DHCP client
            - '80:80/tcp'     # admin web interface (HTTP)
            - '443:443/tcp'   # admin web interface (HTTPS) / DNS-over-HTTPS
            - '443:443/udp'   # DNS-over-QUIC
            - '3000:3000/tcp' # initial setup wizard
        image: adguard/adguardhome

Save the file and start the container using the following command, run from the directory containing the compose file.

docker-compose up -d

Open the AdGuard Home setup page in your browser at YOUR_SERVER_IP:3000.

Enable the default filter list under Filters → DNS blocklists. You can then add custom filters.

Filters

Creating the compose file for media-server

Jackett

Jackett is where you define all your torrent indexers. All the *arr apps use the Torznab feeds provided by Jackett to search for torrents.

There is now an *arr app called Prowlarr that is meant to be the replacement for Jackett, but its FlareSolverr support (used for automatically solving captchas) was added very recently and doesn’t work as well as Jackett’s, so I am sticking with Jackett for the meantime. You can use Prowlarr instead if none of your indexers use captchas.

jackett:
    container_name: jackett
    image: linuxserver/jackett
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Kolkata
    volumes:
      - '/home/${USER}/server/configs/jackett:/config'
      - '/home/${USER}/server/torrents:/downloads'
    ports:
      - '9117:9117'
    restart: unless-stopped
prowlarr:
    container_name: prowlarr
    image: 'hotio/prowlarr:testing'
    ports:
      - '9696:9696'
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Kolkata
    volumes:
      - '/home/${USER}/server/configs/prowlarr:/config'
    restart: unless-stopped

Sonarr - TV

Sonarr is a TV show scheduling and searching download program. It will take a list of shows you enjoy, search for them via Jackett, and add them to the qBittorrent download queue.

sonarr:
    container_name: sonarr
    image: linuxserver/sonarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Kolkata
    ports:
      - '8989:8989'
    volumes:
      - '/home/${USER}/server/configs/sonarr:/config'
      - '/home/${USER}/server:/data'
    restart: unless-stopped

Radarr - Movies

Sonarr, but for movies.

radarr:
    container_name: radarr
    image: linuxserver/radarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Kolkata
    ports:
      - '7878:7878'
    volumes:
      - '/home/${USER}/server/configs/radarr:/config'
      - '/home/${USER}/server:/data'
    restart: unless-stopped

Lidarr - Music

lidarr:
    container_name: lidarr
    image: ghcr.io/linuxserver/lidarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Kolkata
    volumes:
      - '/home/${USER}/server/configs/lidarr:/config'
      - '/home/${USER}/server:/data'
    ports:
      - '8686:8686'
    restart: unless-stopped

Readarr - Books and AudioBooks

# Notice the different host port (8786) on the audiobook container below
readarr:
    container_name: readarr
    image: 'hotio/readarr:nightly'
    ports:
      - '8787:8787'
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Kolkata
    volumes:
      - '/home/${USER}/server/configs/readarr:/config'
      - '/home/${USER}/server:/data'
    restart: unless-stopped

readarr-audio-books:
    container_name: readarr-audio-books
    image: 'hotio/readarr:nightly'
    ports:
      - '8786:8787'
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Kolkata
    volumes:
      - '/home/${USER}/server/configs/readarr-audio-books:/config'
      - '/home/${USER}/server:/data'
    restart: unless-stopped

Bazarr - Subtitles

bazarr:
    container_name: bazarr
    image: ghcr.io/linuxserver/bazarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Kolkata
    volumes:
      - '/home/${USER}/server/configs/bazarr:/config'
      - '/home/${USER}/server:/data'
    ports:
      - '6767:6767'
    restart: unless-stopped

Jellyfin

I personally only use Jellyfin because it’s completely free. I still have Plex installed because Overseerr, which is used to request movies and TV shows, requires Plex. But that’s the only role Plex has in my setup.

I will talk about the devices section later on.

For the media volume, you only need to provide access to the /data/media directory instead of /data, as Jellyfin doesn’t need to know about the torrents.

jellyfin:
    container_name: jellyfin
    image: ghcr.io/linuxserver/jellyfin
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Kolkata
    ports:
      - '8096:8096'
    devices:
      - '/dev/dri/renderD128:/dev/dri/renderD128'
      - '/dev/dri/card0:/dev/dri/card0'
    volumes:
      - '/home/${USER}/server/configs/jellyfin:/config'
      - '/home/${USER}/server/media:/data/media'
    restart: unless-stopped

plex:
    container_name: plex
    image: ghcr.io/linuxserver/plex
    ports:
      - '32400:32400'
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Kolkata
      - VERSION=docker
    volumes:
      - '/home/${USER}/server/configs/plex:/config'
      - '/home/${USER}/server/media:/data/media'
    devices:
      - '/dev/dri/renderD128:/dev/dri/renderD128'
      - '/dev/dri/card0:/dev/dri/card0'
    restart: unless-stopped

Overseerr/Ombi - Requesting Movies and TV shows

I use both. You can use only Ombi if you don’t plan to install Plex.

ombi:
    container_name: ombi
    image: ghcr.io/linuxserver/ombi
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Kolkata
    volumes:
      - '/home/${USER}/server/configs/ombi:/config'
    ports:
      - '3579:3579'
    restart: unless-stopped

overseerr:
    container_name: overseerr
    image: ghcr.io/linuxserver/overseerr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Kolkata
    volumes:
      - '/home/${USER}/server/configs/overseerr:/config'
    ports:
      - '5055:5055'
    restart: unless-stopped

qBittorrent - Torrent downloader

I use the qflood container. Flood provides a nice UI, and this image automatically manages the connection between qBittorrent and Flood.

qBittorrent only needs access to the torrents directory, not the complete data directory.

qflood:
    container_name: qflood
    image: hotio/qflood
    ports:
      - "8080:8080"
      - "3005:3000"
    environment:
      - PUID=1000
      - PGID=1000
      - UMASK=002
      - TZ=Asia/Kolkata
      - FLOOD_AUTH=false
    volumes:
      - '/home/${USER}/server/configs/qflood:/config'
      - '/home/${USER}/server/torrents:/data/torrents'
    restart: unless-stopped

Heimdall - Dashboard

There are multiple dashboard applications but I use Heimdall.

heimdall:
    container_name: heimdall
    image: ghcr.io/linuxserver/heimdall
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Kolkata
    volumes:
      - '/home/${USER}/server/configs/heimdall:/config'
    ports:
      - 8090:80
    restart: unless-stopped

Flaresolverr - Solves Cloudflare captchas

If your indexers use captchas, you will need FlareSolverr for them.

flaresolverr:
    container_name: flaresolverr
    image: 'ghcr.io/flaresolverr/flaresolverr:latest'
    ports:
      - '8191:8191'
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Kolkata
    restart: unless-stopped

Transcoding

As I mentioned in the Jellyfin section, the compose file has a “devices” section. It is used for transcoding. If you don’t include it, transcoding will only use the CPU. To utilise your GPU, the devices must be passed through to the container.

Read this guide to set up hardware acceleration for your GPU: https://jellyfin.org/docs/general/administration/hardware-acceleration.html

Generally, these devices are the same for Intel GPU transcoding:

devices:
      - '/dev/dri/renderD128:/dev/dri/renderD128'
      - '/dev/dri/card0:/dev/dri/card0'

To monitor GPU usage, install intel-gpu-tools:

sudo apt install intel-gpu-tools

Now, create a compose file for the media server.

mkdir ~/server/compose/media-server
vi ~/server/compose/media-server/docker-compose.yml

Copy all the containers you want to use under services. Remember to add the version string, just like in the AdGuard Home compose file.
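For example, the file’s overall shape should look like this (only with the services you picked, each nested under services with the same indentation as the snippets above):

```yaml
version: '3.3'
services:
    jackett:
        container_name: jackett
        # ... rest of the jackett block from above ...
    sonarr:
        container_name: sonarr
        # ... rest of the sonarr block from above ...
```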

Configuring the docker stack

Start the containers using the same command we used to start the AdGuard Home container, run from the new compose directory.

docker-compose up -d

Jackett

Navigate to YOUR_SERVER_IP:9117

Add a few indexers to Jackett using the “Add indexer” button. You can see the indexers I use in the image below.

Indexers

qBittorrent

Navigate to YOUR_SERVER_IP:8080

The default username is admin and the password is adminadmin. You can change them under Tools → Options → WebUI.

In the options, change “Default Save Path” to /data/torrents/ and “Keep incomplete torrents in” to /data/torrents/incomplete/.

Create categories by right-clicking in the sidebar under Category. Enter TV as the category and tv as the path; the path needs to match the folder you created to store that media. Similarly, enter Movies as the category with movies as the path. This lets qBittorrent automatically move the media to its correct folder.

Sonarr

Navigate to YOUR_SERVER_IP:8989

  • Under “Download Clients”, add qBittorrent. Enter the host as YOUR_SERVER_IP, the port as 8080, and the username and password you used for qBittorrent. In Category, type TV (or whatever you selected as the category name (not path) in qBittorrent). Test the connection and then save.
  • Under indexers, for each indexer you added in Jackett
    • Click on add button
    • Select Torznab
    • Copy the Torznab feed for the indexer from Jackett
    • Copy the api key from jackett
    • Select the categories you want
    • Test and save
  • Under Settings → Media Management, define the root folder as /data/media/tv

Repeat this process for Radarr, Lidarr and Readarr.

Use /data/media/movies as the root folder for Radarr, and so on.

The setup for Ombi/Overseerr is super simple: just open the URL and follow the on-screen instructions.

Bazarr

Navigate to YOUR_SERVER_IP:6767

Go to Settings → Sonarr. Enter the host as YOUR_SERVER_IP and the port as 8989. Copy the API key from Sonarr’s Settings → General.

Similarly for Radarr, enter the host as YOUR_SERVER_IP and the port as 7878. Copy the API key from Radarr’s Settings → General.

Jellyfin

Go to YOUR_SERVER_IP:8096

  • Add all the libraries by selecting a content type and giving the library a name. Select each library’s location from /data/media. Repeat this for movies, tv, music, books and audiobooks.
  • Go to Dashboard → Playback and enable transcoding by selecting VAAPI, with the device set to /dev/dri/renderD128

Monitor GPU usage while playing content using

sudo intel_gpu_top

Heimdall

Navigate to YOUR_SERVER_IP:8090

Set up all the services you use so you don’t need to remember the ports, as shown in the first screenshot.

Updating docker images

With Docker Compose, updates are very easy.

  • Navigate to the compose file directory, e.g. ~/server/compose/media-server.
  • Run docker-compose pull to download the latest images.
  • Run docker-compose up -d to recreate the containers with the latest images.
  • Remove old images with docker system prune -a.

What’s next

  • You can set up a VPN if torrents are blocked by your ISP/country. I wanted to keep this guide simple, and I don’t use a VPN for my server, so I have left out the VPN part.
  • You can read about port forwarding to access your server over the internet.
  • Tiritibambix · 2 years ago

    This is a freaking great guide. I wish I had this wonderful resource when I started selfhosting. Thanks for this.

    People might also want to have a look at Pi-hole as an alternative to AdGuard for ad blocking. It is awesome.

    I prefer homepage over heimdall. It is more configurable, but less noob friendly.

    Jellyseerr is a fork of Overseerr that integrates very well with Jellyfin. Reiverr is promising for discovering and adding content.

    • Doink · 2 years ago

      The code base in Reiverr is beautiful and SvelteKit is amazing.

    • @[email protected] · 2 years ago

      Seconded.

      I found heimdall unreliable and not very lightweight. Considering I essentially just wanted bookmarks it made more sense to switch to an app similar to homepage.

  • @[email protected] · 2 years ago

    Can anyone assist with an issue? I get to this part of the guide: > Save the following content to the docker-compose.yml file.

    I am not quite sure how to “save” it. I tried the :wq thing and it seemed to work? But then when I tried starting the container by inputting >docker-compose up -d

    I get >/usr/local/bin/docker-compose: line 1: Not: command not found

    I’m stuck at this point. Any tips would be appreciated!

    • lillo · 2 years ago

      Perhaps the manual is a bit outdated. In recent versions of Docker, docker-compose is installed as a plugin for the docker command. So instead of using docker-compose up -d, try using docker compose up -d (note the white space between “docker” and “compose”).

      • @[email protected] · 2 years ago

        That input results this response from the terminal:

        no configuration file provided: not found

    • @[email protected] · 2 years ago

      I had to press ZZ (shift-Z twice) to save and exit the compose file before running ‘docker-compose up -d’

      • @[email protected] · 2 years ago

        That brought me back out of the file writing screen but when I run the docker-compose up -d it tells me the same as before, “/usr/local/bin/docker-compose: line 1: Not: command not found”

        • @[email protected] · 2 years ago

          Double check the code, especially if you copied and pasted anything. Make sure the indentation is consistent and uses spaces, not tabs (YAML doesn’t allow tabs for indentation). I had a lot of issues like this when copying and pasting compose files.

          I started using the nano editor (sudo nano docker-compose.yml) instead and never saw this issue again. Might be worth a try.

  • @[email protected] · 2 years ago

    Wow. This is great, but man that seems like a lot of points of potential failure. Helpful to have a guide but this remains intimidating to me.

    • @[email protected] · 2 years ago

      Yeah, this is like someone asking “how can I watch free movies” and then someone replying with how to build a business plan to run a company that films and produces movies.

      Just install Plex on any old hardware and open it up to remote access. Get a cheap pfSense firewall and create a forwarding rule. Download torrents with qBittorrent on the box running Plex and have Plex index that folder. It doesn’t have to be a million steps of highly complex stuff. Lastly, get a Pi-hole set up with one of the many guides out there and you have essentially the same thing as this long, terrifying guide but with way fewer points of failure.

      • @[email protected] · 2 years ago

        Honestly, I agree with you.

        Of course, if someone offered to come by my house and set this all up for free I wouldn’t say no, but nowadays with a fast enough internet connection and the right tracker you’re a few keystrokes and maybe 15 minutes away from any movie or TV series you might feel like watching.

        The 15/20 minutes saved don’t justify the massive work and maintenance to keep this all running.

      • @[email protected] · 2 years ago

        The solution in this post is more complicated because it does more, and in an automated fashion, than your solution. Yours is simpler to set up, sure, but requires a lot of manual intervention to add content.

        Once you go through the trouble of setting up the more complex solution, it pretty much takes care of itself, content-wise. It’s like your own self hosted Netflix/Hulu/Spotify/whatever else all in one.

        Some people prefer your way, others prefer the more complicated way. I’m certainly glad someone has posted a guide, because either way I’m now aware of the steps involved and how big of an undertaking each solution will be, and can make an informed decision on which setup works for me.

    • lemmyvore · 2 years ago

      You can use the guide to install just Jellyfin and Qbittorrent.

      You’ll have to do what the *arr apps do manually — search torrents yourself and track down each episode etc., then add them to qBittorrent, then transfer the files to where Jellyfin expects them when they’re done downloading, look for subtitles etc.

      It’s not as nice as the *arr setup because it can’t “take requests”, basically you have to be the one to get the stuff that your friends and family ask for and manage it with Jellyfin… on the other hand it’s much faster to get going — and you can always add *arr stuff later, one by one.

  • @[email protected] · 2 years ago

    I tried a few days ago but

    I couldn’t have docker containers on a separate drive

    And I couldn’t get Jellyfin to attach to my port through docker. The logs didn’t show anything was going wrong though

  • @[email protected] · 2 years ago

    I use minidlna, qbittorrent, qbittorrent-nox on a very old raspberry pi. A 4tb USB hard drive is attached via a powered hub. I can stream 4k Atmos using vlc installed on my “smart” tv. Can’t it be this simple? What’s the reason to dive into docker and plex?

    • @[email protected] · 2 years ago

      Docker eases the automated setup.
      Yours surely works, but Docker Compose is really nice if you want multiple instances of one thing on the same hardware (like two Sonarr/Radarr for 4K content).
      That is simply impossible with regular installs.
      Also, while it complicates some things, it makes maintenance and updates much easier than anything else.
      Remove the image and you are only left with the files you put in /path/to/folder.
      Remove a conventional program and you’d need to hunt down the files it created somewhere in the file structure, like AppData or /opt and other folders.

          • @[email protected] · 2 years ago

            But they can pull different quality profiles based on your list preferences right? I don’t see why you need one instance for downloading 4k and one for 1080p.

            • @[email protected] · 2 years ago

              Depends on the setup. Maybe you run everything off a raspberry pi and can’t afford to transcode 4k, so you have a separate 4k library for local users only. I could also see wanting to separate the volumes when you have multiple servers attached to a single NAS.

              IDK, I don’t personally bother with 4k, but I imagine it’s a little more to manage if you’re sharing your media out with friends/family.

            • @[email protected] · 2 years ago

              I don’t do it, but I read that the reasons for doing it were having both versions side by side without one trumping the other, and without doing a manual or automatic transcode with something like Plex/Handbrake.
              Or you would make a separate 4K library, so when you share the library with family, the family only watches content that fits in the upstream pipe and doesn’t transcode, while you can watch crisp 4K content.

        • @[email protected] · 2 years ago

          the option to have two instances is nice for maintenance stuff, e.g.

          • you broke your instance and want to migrate to a fresh one
          • you want to try out a new workflow/plugin/fork/version without committing to it

          another benefit of containers:

          • you can do disaster recovery on a single service without affecting the other services on the same machine
          • handling incompatible lib versions becomes trivial (e.g. two different PHP versions on the same machine is usually a pain).
    • lemmyvore · 2 years ago

      Plex/Jellyfin is only needed if you need any of its features: remote access, ability to transcode (for reduced bandwidth when remote or when the client device doesn’t support the codec), showing the media as a “library”, search, last watched, ability to fetch information and subtitles about the media, per-user preferences and watch list etc.

      You can also achieve some of these things with local apps like Kodi, Archos Player, BubbleUPnP etc. Or you can just do what you do and play files directly over a file share.

      Docker helps you keep your main system much cleaner and helps with backups and recovering after failures/reinstall/upgrades.

      With Docker the base OS is very basic indeed and just needs some essential things like SSH and Docker installed, so you can use a super reliable OS like Debian stable and not care that it doesn’t have super recent versions of various apps, because you install them from Docker images anyway.

      The OS is not affected by what you install with Docker so it’s very easy to reinstall if needed, and you never have to mess with its packages to get what you want.

      Docker also lets you separate the app files from the persistent files (like configs, preferences, databases etc.) so you can backup the latter separately and preserve them across reinstalls or upgrades of the app.

      Docker also makes it very easy to do things like experiment with a new app, or a new version of an app, or run an app in an environment very unlike your base OS, or get back the exact same environment every time etc. All of these are classic problems when you run apps directly on the OS — if you’ve been doing that for a while you’ve probably run into some issues and know what I mean.

  • @[email protected] · 2 years ago

    Is it possible to do this all on Raspberry Pi OS? I purchased an 8GB RPi 4 and it came with Buster pre-installed. I don’t have any other computer. I have no way of writing Ubuntu onto a micro SD. :/

    • Funderpants · 2 years ago

    I’d like to know this too. I planned to use my laptop as the server, but I have a spare RPi4 that I would prefer.

      • wolfshadowheart · 2 years ago

        Yes, you can use a Pi 4 to accomplish the results of this guide; I used a Pi 3B+ for a few years without any major issues. However, you will not be able to follow this guide exactly, as Pis are a different architecture and so you need different images for the initial Ubuntu setup. Mostly everything after that will be the same though.

        Just keep some spare copies of your setup mirror-imaged to another SD card once you’re all done and you are golden. Configure your download settings in Sonarr/Radarr to avoid 4K content; that’s the only real limitation of the Pis, outside of the SD card lifespan (solved mostly by just not logging).

        @spacecowboy - not being able to write an image will make the Pi4 as a server a biiit more difficult. Do you have an Android phone? There’s EtchDroid or Pi SD Card Imager, and the materials for them can cost under $10 (you’d want the SD card reader that can plug into your phone’s port, for example). It’s fleeting otherwise; chances are high that you will get it set up and then the SD card will die and you’d be out of luck regardless. If the Pi is your only computer for now, then I’d keep it that way. Either way, I do highly suggest some backup SD cards; they are cheap and you rarely need more than 32 GB for the operating system and basic usage - anything with heavy logging or storage should be kept on an external hard drive.

        While it’s possible with an Android device, a library computer with permission for USB devices and temporary downloads would maybe be an even better option. It’s really nice to be able to get your server all set up and then make a duplicate of the SD card, which I don’t believe is possible on Android. It’s imperative to have a backup since SD cards do have a lifespan; using one as a main server with no backups is putting all your eggs in one basket. All it takes is forgetting to disable logging and the clock starts ticking.

        It’s also nice to be able to test out different operating systems; you might find that Buster has more overhead than something like DietPi, a command-line-based OS, as well as being slightly less straightforward for your needs if the Pi is going to be a headless server. But like I said, if you’re using the Pi as a regular computer, DietPi won’t be a viable option since it has no GUI.

        • @[email protected] · 2 years ago

          Thank you for this detailed response. I was able to buy a USB-stick-style SD/mSD reader/writer and a couple of 128gb cards to go with it. I have ubuntu up and running now and a backup as well. I tried following this guide but I keep running into issues around the docker compose part. I think I am in over my head at this point and will just make a local setup the way I know how and try again in the future.

          Thanks for the tips about saving my bacon with multiple SD cards.

    • @[email protected] · 2 years ago

      Yes, it is possible; I’ve been running a similar setup on the 4 GB model. The Raspberry Pi is too weak to transcode, so you are stuck with direct play - aka you have to download (set in Radarr/Sonarr) a quality that your player supports, but that’s an easy task for the *arr stack.

      You can probably use a PC in the local library, but you might need to bring your own SD card reader. It’s good to have the option to start over if something goes wrong, like the SD card dying.

      • @[email protected] · 2 years ago

        Thanks for the info! I did end up purchasing a new microSD and USB writer/reader and went to the library, up on Ubuntu now.

    • @[email protected] · 2 years ago

      I’m working on getting this up and running on my pi 4. If I’m successful, I will post a guide

  • @[email protected]
    link
    fedilink
    English
    1
    edit-2
    3 months ago

    > Open up the Adguard home setup on YOUR_SERVER_IP:3000.

    OP, could you please explain how to do this? Just typing in the command (yes, I’m entering in my IP address where instructed) results in the response ‘command not found’

    • db0 (OP) · 1 point · 3 months ago

      I suspect this is meant to go into your browser’s address bar, not the terminal.

      • @[email protected]
        link
        fedilink
        English
        13 months ago

        I have 1 more question.

        I have multiple drives in my server: 1 drive for the OS and a RAID array for bulk storage and a NAS. At the start of your guide you mentioned that this setup is for a single drive.

        How would the steps of the guide change for a setup like mine? Should I still install everything on the OS drive like you did and point to the RAID array later? Or should all of the docker setup be done on the RAID array at the start?

        • db0 (OP) · 1 point · 3 months ago

          Note I haven’t written this guide myself

        • @[email protected]
          link
          fedilink
          English
          13 months ago

          > How would the steps of the guide change for a setup like mine? Should I still install everything on the OS drive like you did and point to the RAID array later? Or should all of the docker setup be done on the RAID array at the start?

          This is entirely personal preference as to what order you want to do it in. If I’m following a guide, I usually make any changes after I’m done, so that I know I’ve followed the guide correctly first.

  • @[email protected]
    link
    fedilink
    English
    42 years ago

    As an FYI to anyone trying this, I ran into the following problems and solved them.

    1. Port 53 (DNS) was already bound by systemd-resolved, which caused the AdGuard container to fail. From their documentation (https://hub.docker.com/r/adguard/adguardhome), do the following; I’ve added the commands I ran below.

    Create the file and open it:

    ```
    sudo mkdir /etc/systemd/resolved.conf.d
    sudo touch /etc/systemd/resolved.conf.d/adguardhome.conf
    sudo nano /etc/systemd/resolved.conf.d/adguardhome.conf
    ```

    Copy this in and save:

    ```
    [Resolve]
    DNS=127.0.0.1
    DNSStubListener=no
    ```
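    For anyone who prefers not to open an editor, the same drop-in can be written in one shot and systemd-resolved reloaded so it releases port 53. This is just a sketch of the steps above; the verification command at the end is optional.

    ```shell
    # Write the drop-in non-interactively, then reload systemd-resolved
    # so it stops listening on 127.0.0.53:53.
    sudo mkdir -p /etc/systemd/resolved.conf.d
    printf '[Resolve]\nDNS=127.0.0.1\nDNSStubListener=no\n' | \
      sudo tee /etc/systemd/resolved.conf.d/adguardhome.conf > /dev/null
    sudo systemctl reload-or-restart systemd-resolved
    # Nothing should be listening on port 53 any more:
    sudo ss -lntu | grep ':53 ' || echo "port 53 is free"
    ```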

    2. The DHCP client was already bound to port 68 on the interface my VM was using. To resolve this, set a static IP. I used the following:

    ```
    sudo nano /etc/netplan/00-installer-config.yaml
    ```

    Overwrite it with the following. If your adapter isn’t labeled ens33, change it appropriately:

    ```yaml
    network:
      renderer: networkd
      ethernets:
        ens33:
          addresses:
            - 192.168.1.200/24
          nameservers:
            addresses: [192.168.1.1]
          routes:
            - to: default
              via: 192.168.1.1
      version: 2
    ```
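    The edited netplan file only takes effect once it is applied. A minimal follow-up sketch (ens33 is this commenter’s adapter name; yours may differ):

    ```shell
    # "netplan try" applies the config and rolls back automatically if you
    # don't confirm within a timeout (useful when working over SSH).
    sudo netplan try
    # Once you're happy, make it permanent:
    sudo netplan apply
    # Confirm the static address was assigned (replace ens33 if needed):
    ip addr show ens33
    ```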

    • @[email protected]
      link
      fedilink
      English
      1
      edit-2
      4 months ago

      When I try to run "sudo nano /etc/systemd/resolved.conf.d/adguardhome.conf " it opens up GNU nano 7.2 and tells me at the bottom of the screen that the file is actually a directory and refuses to let me copy/paste anything into it.

      EDIT: Looks like the issue for me is with nano: it’s trying to make this a directory instead of a file. I’m able to get it working with vi. If anyone is having the same issue, you have to delete the “file” you made with nano, then make a new one with vi before it’ll work.

      That being said, the last portion regarding the DHCP conflict also isn’t working, probably due to the formatting not being specified.

      EDIT2: Looks like the real issue for fixing the DHCP conflict on port 68 is a bit more complicated. There are multiple possibilities for what the file you need to modify can be named.

      Personally, the solution I’m going with is to just disable port 68 for adguard. According to this source, the only downside is having your router handle DHCP, which I’m fine with at the moment. The source I posted refers to port 67, but it works for port 68 as well.

      If anyone reading this would prefer to let Adguard use port 68 by setting up a static IP address, This guide is more detailed and also includes some of the variances in filenames you might come across to better solve the problem for your setup.
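      For readers wondering what “just disable port 68 for adguard” looks like in practice: assuming a docker-compose setup like the one in the guide, it simply means not publishing the DHCP port mappings for the AdGuard Home container. The service name and remaining ports below are illustrative, not taken from the guide:

      ```yaml
      # Hypothetical docker-compose.yml excerpt for AdGuard Home.
      # Leaving out the 67/68 UDP mappings lets the router keep handling DHCP.
      services:
        adguardhome:
          image: adguard/adguardhome
          ports:
            - "53:53/tcp"     # DNS
            - "53:53/udp"     # DNS
            # - "67:67/udp"   # DHCP server, omitted
            # - "68:68/udp"   # DHCP client, omitted
            - "3000:3000/tcp" # initial setup web UI
      ```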

    • Blxter · 2 points · 1 year ago

      Hey, so I know this is really old, but I have been running AdGuard Home on a Raspberry Pi for a while now and am trying to move it over to run with everything else. The only problem is that whenever I set “DNSStubListener=no” it breaks all the API integrations for Homepage and Overseerr/Tautulli etc. Do you know of a way to fix this?

  • @[email protected]
    link
    fedilink
    English
    132 years ago

    Wow, this is so detailed.

    I was looking into setting up some stuff because it seems like a fun project to me, but it was very daunting.

    Having it all here definitely helps a lot and gives me motivation to start.

  • @[email protected]
    link
    fedilink
    English
    8
    edit-2
    2 years ago

    Nice guide! However, I’ve always wondered if all of this even makes sense. Like, can’t you just stream from the internet? I understand having things on your physical storage is an extra degree of freedom, but it’s very rare for me to watch something more than once. Also, while you can technically run it off a Raspberry Pi, it’s not really recommended, and you would need a separate PC, which just adds to the cost. Meanwhile, with a simple app like Cloudstream, you can just get whatever you want whenever you want. The only advantage I see of the *arr + media server approach is not needing to connect to a VPN.

    EDIT: After reading the replies, I realized I should have specified that by streaming sites I mean the shady ones; in my country we use different words, and I see how that can confuse some people.

    • wolfshadowheart · 2 points · 2 years ago

      It’s all about use case. You don’t rewatch shows or movies, so maybe storing media isn’t for you. I’m often rewatching or just having background stuff playing, so it’s nice having it available.

      On top of that, I was tired of streaming services removing content. Netflix removing It’s Always Sunny actually got me started, and the subsequent removal of episodes from shows confirmed I made the right choice. I actually have control over my media, in that I can put a playlist of any number of shows together I want.

      I have playlists for 70’s-80’s shows like The Brady Bunch, The A-Team, Knight Rider, just hit shuffle and it’s 1,000 episodes of random nostalgia. I can set up programs like DizqueTV and set up my own TV channels on top of this. Why pick and choose a show when TV can pick for me?

      In regards to “the hardware” I ran my Plex server on a Pi3 for years. Unless you’re pushing 4k content or certain filetypes, the Pi is more than enough.

      In addition to all this, I’m not reliant on my internet. If power goes out partially, I still have access to my hard drives and have always been able to pop on a show or movie while I clean up in the dark. Or sometimes the internet just goes out and it’s really nice being unaffected.

      I think it’s been 7 or 8 years since I started in college, and I’ve spent about $600 total on hard drives that I’m still using today. The money I’ve spent is invested into my server, rather than paying some service for something I can do myself; a service that has to submit to the will of the government. I was curious about the price range of Cloudstream and saw that they took the site and code down, so it’s just another streaming situation that’s no different, except the chance of payment reaching the people who actually worked on the show is now completely gone. Even just $30/month after 5 years is $1,800.

      I pirate content because I can’t trust Netflix/Hulu/Disney to not fuck with their content. So why would I pay another 3rd party to do the same thing? Moreover, when I subscribe to these streaming services I can contribute to the metrics to say, “Hey, I want more It’s Always Sunny after S14!”.

      Finally, it’s a hobby as well. I like computers. Linux annoys the shit out of me, but I’ve enjoyed setting up a server used for more than just media. On the Pi I would just search for what I wanted and add it as I saw fit. Obviously, there’s the *arrs as well, which can get it all automated for you. That’s a bit of setup on its own, but it’s fairly straightforward.

    • @[email protected]
      link
      fedilink
      English
      82 years ago
      1. You will probably not reach the same level of quality with something like a pirate streaming host:
         either the content will have a lower bitrate or a lower resolution.
      2. Foreign dubs are hard to come by in decent quality.
      3. Yes, storage and compute are surely more expensive, but for some it’s a hobby and a learning experience.
    • @[email protected]
      link
      fedilink
      English
      152 years ago

      I used to be in your camp, but then switched to a Plex setup, etc.

      Main reasons:

      1. I’m seeing the trend of media being removed from people and I’m getting sick of it. I want my shit to be mine and available to me at a moments notice.

      2. My collection basically consists of all the top movies/shows that I can rotate watching.

      3. It makes it so that my tech illiterate family can enjoy everything too without knowing how anything works.

      4. I could cancel all those greedy corporate assholes splitting everything into a thousand services.

      • @[email protected]
        link
        fedilink
        English
        1
        edit-2
        2 years ago

        Not discrediting you, this is just my point of view. Media being removed is not really a problem on streaming sites, since there are usually many where you can watch the same thing, and as for point 4, streaming sites are basically all the same.

        I guess it’s just different usage, because I don’t really like rewatching things and my family doesn’t usually watch movies/TV series.

        So in the end, the only thing I don’t like about how I do it is not being able to physically have the files.

        EDIT: I just realized I should have specified that by streaming sites I mean the shady ones; in my country we use different words.

    • @[email protected]
      link
      fedilink
      English
      52 years ago

      Personally I just think it’s easier to pick out the movies and shows I want to watch, and then be sure that they will be there once I sit down to watch them. No uncertainty, no hunting down a good stream or missing episode, everything is just there and ready. The process is very simple once everything is set up, and you can still delete video files after you watch them if you want to.

    • @[email protected]
      link
      fedilink
      English
      92 years ago
      • You can’t actually own movies anymore unless you buy physical copies (which are subject to damage over time).
      • You’re dependent on someone else’s servers to stream the movies.
      • The providers can and have removed movies you’ve paid for.
      • A local setup is not dependent on your internet connection, which can be unreliable for many.
      • @[email protected]
        link
        fedilink
        English
        32 years ago

        I meant free streaming sites with reuploads, but the other points still stand strong. Thanks!

    • archomrade [he/him] · 8 points · 2 years ago

      The nature of pirating means that specific media/torrents/indexes/domains are frequently down or unavailable. A solution today might be taken down or raided by authorities tomorrow.

      It’s just a little more stable/redundant to have media stored locally. Plus, by streaming from something like Cloudstream, you’re not contributing to torrent seeding. Not to mention that a turnkey solution is a large target for authorities, so it’s possible, if not likely, that it’ll stop working someday and you’ll have to find something else.

      It’s certainly not for everyone, but if you can afford it, it’s a nice solution.